    Brain Tumor Detection and Localization: An Inception V3 - Based Classification Followed By RESUNET-Based Segmentation Approach

    Brain tumors put both adults and children at risk, but accurate and prompt detection can save lives. This research focuses on the identification and localization of brain tumors. Much research has been published on the analysis and classification of brain tumors, but only a few studies have addressed the issue of feature engineering. New methods are required to overcome the difficulties of manual diagnostics and traditional feature-engineering procedures, and an automated diagnostic method is needed to reliably segment and identify brain tumors. Despite ongoing progress, automated brain tumor diagnosis still confronts hurdles such as low accuracy and a high rate of false-positive outcomes. The model described in this work uses deep learning to analyse brain tumors, improving both classification and segmentation: Inception-V3 is applied for tumor classification and RESUNET for segmentation, with one extra layer added on top of the Inception-V3 model as a classification head. The outcomes of these procedures are compared with those of existing methods. The Inception-V3 model with the extra classification layer reaches a test accuracy of 0.9996 with a loss value of 0.0025, while the segmentation model for localization and detection achieves a Tversky score of 0.9688 and an accuracy of 0.9700.
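    As an illustration of the transfer-learning setup the abstract describes, the sketch below adds a single classification head on top of a pre-trained Inception-V3 backbone. It assumes the Keras/TensorFlow API; the head width, input shape, and two-class output are illustrative assumptions, not the paper's exact configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Pre-trained Inception-V3 backbone without its original top classifier.
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False  # keep the backbone frozen; train only the head

    # Extra classification head added on top of the backbone.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),   # assumed head width
        layers.Dropout(0.3),
        layers.Dense(2, activation="softmax"),  # assumed: tumor vs. no tumor
    ])

    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])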

    Enhancing Semantic Segmentation: Design and Analysis of Improved U-Net Based Deep Convolutional Neural Networks

    In this research, we present a method for semantic segmentation based on a modified U-Net architecture built on deep convolutional neural networks (CNNs), and we analyse its design in detail with the aim of improving both accuracy and efficiency. Semantic segmentation, a crucial task in computer vision, requires assigning each pixel in an image to one of several predefined object classes. The proposed Improved U-Net architecture uses deep CNNs to capture complex spatial characteristics efficiently while preserving the associated context. Through thorough experimentation and evaluation, the study illustrates the efficacy of the Improved U-Net in a variety of real-world scenarios. The network's design combines intricate feature extraction, down-sampling, and up-sampling to produce high-quality segmentation results. The study presents comparative evaluations against the classic U-Net and other state-of-the-art models and emphasizes the significance of hyperparameter fine-tuning. The suggested architecture shows strong performance in terms of accuracy and generalization, demonstrating its promise for a variety of applications. The experimental findings validate the architecture's design decisions and its potential to advance computer vision by improving segmentation precision and efficiency.
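    To make the encoder/decoder structure concrete, the following is a minimal U-Net-style sketch with down-sampling, up-sampling, and skip connections, assuming the Keras/TensorFlow API. The depth, filter counts, input shape, and class count are illustrative assumptions and do not reproduce the paper's Improved U-Net configuration.

    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_block(x, filters):
        # Two 3x3 convolutions, as in the classic U-Net blocks.
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def build_unet(input_shape=(128, 128, 3), num_classes=2):
        inputs = layers.Input(input_shape)

        # Contracting path (down-sampling): capture spatial context.
        c1 = conv_block(inputs, 32)
        p1 = layers.MaxPooling2D()(c1)
        c2 = conv_block(p1, 64)
        p2 = layers.MaxPooling2D()(c2)

        # Bottleneck.
        b = conv_block(p2, 128)

        # Expanding path (up-sampling): recover resolution and reuse encoder
        # features via skip connections (concatenation).
        u2 = layers.UpSampling2D()(b)
        u2 = layers.Concatenate()([u2, c2])
        c3 = conv_block(u2, 64)
        u1 = layers.UpSampling2D()(c3)
        u1 = layers.Concatenate()([u1, c1])
        c4 = conv_block(u1, 32)

        # Per-pixel class scores for semantic segmentation.
        outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c4)
        return tf.keras.Model(inputs, outputs)

    model = build_unet()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")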

    Testing Resource Allocation for Modular Software Using Genetic Algorithm

    Software testing is one of the important steps of the SDLC, and a key issue in testing is how to allocate limited resources so that testing finishes on time and quality software is delivered. A number of Software Reliability Growth Models (SRGMs) have been developed over the past three decades for allocating testing resources, but the majority of these models assume a static environment. In this paper we develop a model in a dynamic environment in which the software is divided into different modules, and we use Pontryagin's maximum principle to solve it. Finally, a numerical example is solved to allocate the resources for a given module. A Genetic Algorithm (GA), a powerful tool for search and optimization problems, is used to allocate the resources optimally.
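    For illustration only, the sketch below uses a simple genetic algorithm to split a fixed testing-resource budget across modules so as to maximize the expected number of faults detected. The exponential detection curve and all parameter values are assumptions chosen for the example; the sketch does not reproduce the paper's dynamic model or its Pontryagin-based solution.

    import math
    import random

    A = [80, 60, 40]        # assumed expected faults in each module
    B = [0.05, 0.08, 0.03]  # assumed fault-detection rate of each module
    BUDGET = 100.0          # total testing resource to distribute
    POP, GENS, MUT = 40, 200, 0.1

    def detected(alloc):
        # Expected faults detected under an exponential SRGM-style curve.
        return sum(a * (1 - math.exp(-b * w)) for a, b, w in zip(A, B, alloc))

    def normalize(alloc):
        # Rescale so the allocation exactly exhausts the budget.
        s = sum(alloc)
        return [BUDGET * w / s for w in alloc]

    def crossover(p1, p2):
        # Arithmetic crossover keeps allocations non-negative.
        t = random.random()
        return normalize([t * x + (1 - t) * y for x, y in zip(p1, p2)])

    def mutate(alloc):
        if random.random() < MUT:
            i = random.randrange(len(alloc))
            alloc[i] *= random.uniform(0.5, 1.5)
            alloc = normalize(alloc)
        return alloc

    pop = [normalize([random.random() + 1e-9 for _ in A]) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=detected, reverse=True)
        parents = pop[:POP // 2]  # truncation selection: keep the better half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        pop = parents + children

    best = max(pop, key=detected)
    print("allocation per module:", [round(w, 2) for w in best])
    print("expected faults detected:", round(detected(best), 2))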

    Computational intelligence and its applications in healthcare


    Pathology for gastrointestinal and hepatobiliary cancers using artificial intelligence

    Artificial intelligence (AI) can extract complex information from visual data. Histopathology images of gastrointestinal (GI) and liver cancer provide a large quantity of data that human observers can only partially decipher. AI permits the in-depth study of digitized histological slides of GI and liver cancer, complementing human observers, and has a wide variety of clinically useful applications. First, AI can recognize tumor tissue automatically, alleviating pathologists' ever-increasing workload. Furthermore, AI can capture prognostically significant tissue characteristics and thus predict clinical prognosis across GI and liver cancer types, perhaps surpassing pathologists' capabilities.
